
1. Identity statement
Reference Type: Conference Paper (Conference Proceedings)
Site: sibgrapi.sid.inpe.br
Holder Code: ibi 8JMKD3MGPEW34M/46T9EHH
Identifier: 8JMKD3MGPAW/3RN5TEE
Repository: sid.inpe.br/sibgrapi/2018/08.27.16.23
Last Update: 2018:08.27.16.57.45 (UTC) administrator
Metadata Repository: sid.inpe.br/sibgrapi/2018/08.27.16.23.42
Metadata Last Update: 2022:06.14.00.09.08 (UTC) administrator
DOI: 10.1109/SIBGRAPI.2018.00063
Citation Key: CavallariRibePont:2018:DoCrFe
Title: Unsupervised representation learning using convolutional and stacked auto-encoders: a domain and cross-domain feature space analysis
Format: On-line
Year: 2018
Access Date: 2024, May 01
Number of Files: 1
Size: 1208 KiB
2. Context
Author: 1 Cavallari, Gabriel B.
2 Ribeiro, Leonardo S. F.
3 Ponti, Moacir A.
Affiliation: 1 USP
2 USP
3 USP
Editor: Ross, Arun
Gastal, Eduardo S. L.
Jorge, Joaquim A.
Queiroz, Ricardo L. de
Minetto, Rodrigo
Sarkar, Sudeep
Papa, João Paulo
Oliveira, Manuel M.
Arbeláez, Pablo
Mery, Domingo
Oliveira, Maria Cristina Ferreira de
Spina, Thiago Vallin
Mendes, Caroline Mazetto
Costa, Henrique Sérgio Gutierrez
Mejail, Marta Estela
Geus, Klaus de
Scheer, Sergio
e-Mail Address: moacir@icmc.usp.br
Conference Name: Conference on Graphics, Patterns and Images, 31 (SIBGRAPI)
Conference Location: Foz do Iguaçu, PR, Brazil
Date: 29 Oct.-1 Nov. 2018
Publisher: IEEE Computer Society
Publisher City: Los Alamitos
Book Title: Proceedings
Tertiary Type: Full Paper
History (UTC): 2018-08-27 16:57:45 :: moacir@icmc.usp.br -> administrator :: 2018
2022-06-14 00:09:08 :: administrator -> :: 2018
3. Content and structure
Is the master or a copy? is the master
Content Stage: completed
Transferable: 1
Version Type: finaldraft
Keywords: Deep Learning
Representation learning
Feature extraction
Unsupervised feature learning
Abstract: A feature learning task involves training models that are capable of inferring good representations (transformations of the original space) from input data alone. When working with limited or unlabelled data, and also when multiple visual domains are considered, methods that rely on large annotated datasets, such as Convolutional Neural Networks (CNNs), cannot be employed. In this paper we investigate different auto-encoder (AE) architectures, which require no labels, and explore training strategies to learn representations from images. The models are evaluated considering both the reconstruction error of the images and the feature spaces in terms of their discriminative power. We study the role of dense and convolutional layers on the results, as well as the depth and capacity of the networks, since those are shown to affect both the dimensionality reduction and the capability of generalising for different visual domains. Classification results with AE features were as discriminative as pre-trained CNN features. Our findings can be used as guidelines for the design of unsupervised representation learning methods within and across domains.
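
Note: the abstract describes training auto-encoders on reconstruction error alone and then reusing the encoder output as a feature space for classification. Below is a minimal illustrative sketch of such a convolutional auto-encoder, written in PyTorch; it is not the authors' code, and the input shape (1x28x28 grayscale), layer widths, and bottleneck size feature_dim are assumptions chosen for brevity.

# Minimal sketch (not the authors' implementation): a convolutional auto-encoder
# trained only on reconstruction loss; the encoder output is later reused as a
# feature vector for a downstream classifier. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    def __init__(self, feature_dim=128):
        super().__init__()
        # Encoder: 1x28x28 image -> feature_dim bottleneck (learned representation)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # -> 16x14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # -> 32x7x7
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, feature_dim),
        )
        # Decoder mirrors the encoder and reconstructs the input image
        self.decoder = nn.Sequential(
            nn.Linear(feature_dim, 32 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1, output_padding=1),  # -> 16x14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2, padding=1, output_padding=1),   # -> 1x28x28
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)      # feature vector used for downstream classification
        return self.decoder(z)   # reconstruction compared against x

def train_step(model, batch, optimizer, criterion=nn.MSELoss()):
    # Unsupervised: no labels are used, only the reconstruction error.
    optimizer.zero_grad()
    loss = criterion(model(batch), batch)
    loss.backward()
    optimizer.step()
    return loss.item()

After unsupervised training with train_step, the frozen encoder (model.encoder) would act as the feature extractor whose discriminative power is then measured with a separate classifier, which is the kind of evaluation the abstract refers to.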
Arrangement 1: urlib.net > SDLA > Fonds > SIBGRAPI 2018 > Unsupervised representation learning...
Arrangement 2: urlib.net > SDLA > Fonds > Full Index > Unsupervised representation learning...
doc Directory Content: access
source Directory Content:
sibgrapi-2018_Analysis_of_cross_domain_unsupervised_learning.pdf  27/08/2018 13:23  1.2 MiB
agreement Directory Content:
agreement.html  27/08/2018 13:23  1.2 KiB
4. Conditions of access and use
data URL: http://urlib.net/ibi/8JMKD3MGPAW/3RN5TEE
zipped data URL: http://urlib.net/zip/8JMKD3MGPAW/3RN5TEE
Language: en
Target File: sibgrapi-2018_Analysis_of_cross_domain_unsupervised_learning.pdf
User Group: moacir@icmc.usp.br
Visibility: shown
Update Permission: not transferred
5. Allied materials
Mirror Repository: sid.inpe.br/banon/2001/03.30.15.38.24
Next Higher Units: 8JMKD3MGPAW/3RPADUS
8JMKD3MGPEW34M/4742MCS
Citing Item List: sid.inpe.br/sibgrapi/2018/09.03.20.37 7
Host Collection: sid.inpe.br/banon/2001/03.30.15.38
6. Notes
Empty Fields: archivingpolicy archivist area callnumber contenttype copyholder copyright creatorhistory descriptionlevel dissemination edition electronicmailaddress group isbn issn label lineage mark nextedition notes numberofvolumes orcid organization pages parameterlist parentrepositories previousedition previouslowerunit progress project readergroup readpermission resumeid rightsholder schedulinginformation secondarydate secondarykey secondarymark secondarytype serieseditor session shorttitle sponsor subject tertiarymark type url volume

